Let's draw some maps. 🗺🧐
Let's start with altair. When your dataset is large, it is nice to enable the JSON data transformer. Instead of embedding the whole dataset in the chart and holding it in memory, it transforms the dataset and saves it into a temporary file, which makes the whole plotting process much more efficient. For more information, check out: https://altair-viz.github.io/user_guide/data_transformers.html
import altair as alt
# saving data into a file rather than embedding into the chart
alt.data_transformers.enable('json')
#alt.renderers.enable('notebook')
# alt.renderers.enable('jupyterlab')
alt.renderers.enable('default')
RendererRegistry.enable('default')
First, we need a dataset with geographical coordinates. This zipcodes dataset contains the latitude and longitude of each zip code area.
from vega_datasets import data
zipcodes_url = data.zipcodes.url
zipcodes = data.zipcodes()
zipcodes.head()
| | zip_code | latitude | longitude | city | state | county |
|---|---|---|---|---|---|---|
| 0 | 00501 | 40.922326 | -72.637078 | Holtsville | NY | Suffolk |
| 1 | 00544 | 40.922326 | -72.637078 | Holtsville | NY | Suffolk |
| 2 | 00601 | 18.165273 | -66.722583 | Adjuntas | PR | Adjuntas |
| 3 | 00602 | 18.393103 | -67.180953 | Aguada | PR | Aguada |
| 4 | 00603 | 18.455913 | -67.145780 | Aguadilla | PR | Aguadilla |
import pandas as pd
zipcodes_url
zipcodesDF = pd.read_csv(zipcodes_url)
zipcodesDF.head()
| | zip_code | latitude | longitude | city | state | county |
|---|---|---|---|---|---|---|
| 0 | 501 | 40.922326 | -72.637078 | Holtsville | NY | Suffolk |
| 1 | 544 | 40.922326 | -72.637078 | Holtsville | NY | Suffolk |
| 2 | 601 | 18.165273 | -66.722583 | Adjuntas | PR | Adjuntas |
| 3 | 602 | 18.393103 | -67.180953 | Aguada | PR | Aguada |
| 4 | 603 | 18.455913 | -67.145780 | Aguadilla | PR | Aguadilla |
Notice that pandas parsed zip_code as an integer and silently dropped the leading zeros (00501 became 501). We can avoid this by loading zip_code as a categorical (string) variable:
zipcodes = data.zipcodes(dtype={'zip_code': 'category'})
zipcodes.head()
| | zip_code | latitude | longitude | city | state | county |
|---|---|---|---|---|---|---|
| 0 | 00501 | 40.922326 | -72.637078 | Holtsville | NY | Suffolk |
| 1 | 00544 | 40.922326 | -72.637078 | Holtsville | NY | Suffolk |
| 2 | 00601 | 18.165273 | -66.722583 | Adjuntas | PR | Adjuntas |
| 3 | 00602 | 18.393103 | -67.180953 | Aguada | PR | Aguada |
| 4 | 00603 | 18.455913 | -67.145780 | Aguadilla | PR | Aguadilla |
zipcodes.zip_code.dtype
CategoricalDtype(categories=['00501', '00544', '00601', '00602', '00603', '00604',
                             '00605', '00606', '00610', '00611',
                             ...
                             '99919', '99921', '99922', '99923', '99925', '99926',
                             '99927', '99928', '99929', '99950'],
                 ordered=False)
Btw, you'll have fewer issues if you pass a URL instead of a dataframe to alt.Chart.
Now that we have the dataset loaded, let's start drawing some plots. Let's say you don't know anything about map projections. What would you try with geographical data? Probably the simplest thing is to treat (longitude, latitude) as Cartesian coordinates and plot them directly.
alt.Chart(zipcodes_url).mark_circle().encode(
x='longitude:Q',
y='latitude:Q',
)
Actually, this itself is a map projection, called the equirectangular projection. This projection (or almost a non-projection) is super straightforward and doesn't require any processing of the data, so it is often used to quickly explore geographical data. As you dig deeper, you still want to think about which map projection fits your needs best. Don't just use the equirectangular projection without thinking!
Anyway, let's make it look slightly better by reducing the size of the circles and adjusting the aspect ratio.
Q: Can you adjust the circle size, width and height of the chart?
# Implement
alt.Chart(zipcodes_url).mark_circle(size=2).encode(
x='longitude:Q',
y='latitude:Q',
).properties(
width=650,
height=200
)
But a much better way to do this is to explicitly specify that they are lat/lng coordinates by using longitude= and latitude= rather than x= and y=. If you do that, altair automatically adjusts the aspect ratio.
Q: Can you try it?
# Implement
alt.Chart(zipcodes_url).mark_circle(size=2).encode(
longitude='longitude:Q',
latitude='latitude:Q',
).properties(
width=650,
height=200
)
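By the way, this "non-projection" has a name that Vega-Lite understands, so you can also request it explicitly. A minimal sketch; the result should look essentially the same as the chart above:
alt.Chart(zipcodes_url).mark_circle(size=2).encode(
    longitude='longitude:Q',
    latitude='latitude:Q',
).project(
    type='equirectangular'
).properties(
    width=650,
    height=200
)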
Because the American empire is far-reaching and complicated, the information density of this map is very low (although interesting). A common projection for visualizing US data is albersUsa, which uses the Albers (equal-area) projection. It is a standard projection used by the United States Geological Survey and the United States Census Bureau. The albersUsa projection is a composite of the US mainland, Alaska, and Hawaii.
To use it, we call the project method and specify which variables are longitude and latitude.
Q: use the project method to draw the map in the AlbersUsa projection.
# Implement
alt.Chart(zipcodes_url).mark_circle(size=3).encode(
longitude='longitude:Q',
latitude='latitude:Q'
).project(
type='albersUsa'
).properties(
width=650,
height=400
)
Now we're talking. 😎
Let's visualize the large-scale zipcode patterns. We can use the fact that zipcodes are hierarchically organized: the first digit captures the largest area divisions, and the following digits capture successively smaller geographical divisions.
Altair provides some data transformation functionalities. One of them is extracting a substring from a variable.
from altair.expr import datum, substring
alt.Chart(zipcodes_url).mark_circle(size=2).transform_calculate(
'first_digit', substring(datum.zip_code, 0, 1)
).encode(
longitude='longitude:Q',
latitude='latitude:Q',
color='first_digit:N',
).project(
type='albersUsa'
).properties(
width=700,
height=400,
)
For each row (datum), you take the zip_code variable, extract a substring (think of Python string slicing), and name the result first_digit. Now you can use this first_digit variable to color the circles. Also note that we declare first_digit as a nominal variable, not quantitative, to obtain a categorical colormap. But we can play with that too.
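As a side note, transform_calculate also accepts the Vega expression written as a plain string instead of the imported substring helper. A minimal sketch of the same chart in that style:
alt.Chart(zipcodes_url).mark_circle(size=2).transform_calculate(
    # the expression string is evaluated by Vega for each row (datum)
    first_digit='substring(datum.zip_code, 0, 1)'
).encode(
    longitude='longitude:Q',
    latitude='latitude:Q',
    color='first_digit:N',
)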
Q: Why don't you extract the first two digits, name it two_digits, and declare it as a quantitative variable? Any interesting patterns? What does it tell us about the history of the US?
# Implement
alt.Chart(zipcodes_url).mark_circle(size=2).transform_calculate(
'two_digits', substring(datum.zip_code, 0, 2)
).encode(
longitude='longitude:Q',
latitude='latitude:Q',
color='two_digits:Q',
).project(
type='albersUsa'
).properties(
width=700,
height=400,
)
The history of zip codes is interesting: the numbers keep increasing as we go from the east coast to the west. The first digit of the code indicates the broad region; for instance, eastern states such as Maine and New York begin with 0 or 1, whereas the western states of California and Washington begin with 9. The next two digits identify a smaller region within each initial area, which translates to a central post office facility for that area. The final two digits signify the local post office of the address.
We can also see a scarcity of points (zip codes) in the mountain zone of the country, as it is less inhabited and thus has fewer post offices.
Q: Also try declaring the first two digits as a categorical variable.
# Implement
alt.Chart(zipcodes_url).mark_circle(size=2).transform_calculate(
'two_digits', substring(datum.zip_code, 0, 2)
).encode(
longitude='longitude:Q',
latitude='latitude:Q',
color='two_digits:N',
).project(
type='albersUsa'
).properties(
width=700,
height=400,
)
Btw, you can always click "view source" or "open in Vega Editor" to look at the JSON object that defines this visualization. You can embed this JSON object in your webpage and easily put up an interactive visualization.
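You can also get the same JSON programmatically if you keep a reference to the chart object. A small sketch (the file name is just an example):
chart = alt.Chart(zipcodes_url).mark_circle(size=2).encode(
    longitude='longitude:Q',
    latitude='latitude:Q',
)
spec = chart.to_json()       # the JSON spec you'd see in the Vega Editor
chart.save('zipcodes.html')  # a self-contained HTML page you can embed or host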
Q: Can you put a tooltip that displays the zipcode when you mouse-over? Example https://altair-viz.github.io/gallery/scatter_tooltips.html
# Implement
alt.Chart(zipcodes_url).mark_circle(size=2).transform_calculate(
'first_digit', substring(datum.zip_code, 0, 1)
).encode(
longitude='longitude:Q',
latitude='latitude:Q',
color='first_digit:N',
tooltip='zip_code:N'
).project(
type='albersUsa'
).properties(
width=700,
height=400,
)
Let's try a choropleth now. The Vega datasets include US county/state boundary data (us_10m) and world country boundary data (world-110m). You can take a look at the boundaries on GitHub (it renders TopoJSON files):
If you click "Raw" then you can take a look at the actual file, which is hard to read.
Essentially, each file is a large dictionary with the following keys.
usmap = data.us_10m()
usmap.keys()
dict_keys(['type', 'transform', 'objects', 'arcs'])
usmap['type']
'Topology'
usmap['transform']
{'scale': [0.003589294092944858, 0.0005371535195261037],
'translate': [-179.1473400003406, 17.67439566600018]}
This transform is used to quantize the data and store the coordinates as integers (more compact than floating-point numbers).
https://github.com/topojson/topojson-specification#212-transforms
usmap['objects'].keys()
dict_keys(['counties', 'states', 'land'])
The data contains not only county-level boundaries but also state and land boundaries.
usmap['objects']['land']['type'], usmap['objects']['states']['type'], usmap['objects']['counties']['type']
('MultiPolygon', 'GeometryCollection', 'GeometryCollection')
land is a multipolygon (one object), while states and counties contain many geometries (multipolygons) because there are many states (counties). We can look at a state as the set of arcs that define it. Its id captures the identity of the state and is the key used to link to other datasets.
state1 = usmap['objects']['states']['geometries'][1]
state1
{'type': 'MultiPolygon',
'arcs': [[[10337]],
[[10342]],
[[10341]],
[[10343]],
[[10834, 10340]],
[[10344]],
[[10345]],
[[10338]]],
'id': 15}
The arcs referred to here are defined in usmap['arcs'].
usmap['arcs'][:10]
[[[15739, 57220], [0, 0]], [[15739, 57220], [29, 62], [47, -273]], [[15815, 57009], [-6, -86]], [[15809, 56923], [0, 0]], [[15809, 56923], [-36, -8], [6, -210], [32, 178]], [[15811, 56883], [9, -194], [44, -176], [-29, -151], [-24, -319]], [[15811, 56043], [-12, -216], [26, -171]], [[15825, 55656], [-2, 1]], [[15823, 55657], [-19, 10], [26, -424], [-26, -52]], [[15804, 55191], [-30, -72], [-47, -344]]]
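If you are curious how the pieces fit together, here is a sketch of decoding one arc by hand, following the TopoJSON spec linked above: each point is delta-encoded relative to the previous one, and the quantized integers map back to longitude/latitude via the transform's scale and translate. (decode_arc is just an illustrative helper, not part of any library.)
def decode_arc(arc, transform):
    # cumulative sum of the deltas, then de-quantize with scale/translate
    sx, sy = transform['scale']
    tx, ty = transform['translate']
    x = y = 0
    points = []
    for dx, dy in arc:
        x += dx
        y += dy
        points.append((x * sx + tx, y * sy + ty))
    return points

decode_arc(usmap['arcs'][1], usmap['transform'])  # a few (lng, lat) pairs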
It seems pretty daunting to work with this dataset, right? But fortunately people have already built tools to handle such data.
# states
states = alt.topo_feature(data.us_10m.url, 'states')
# us counties
us_counties = alt.topo_feature(data.us_10m.url, 'counties')
states
UrlData({
format: TopoDataFormat({
feature: 'states',
type: 'topojson'
}),
url: 'https://cdn.jsdelivr.net/npm/vega-datasets@v1.29.0/data/us-10m.json'
})
Q. Can you find a mark for geographical shapes from here https://altair-viz.github.io/user_guide/marks.html and draw the states?
# Implement
alt.Chart(states).mark_geoshape().encode()
And then project it using the albersUsa?
# Implement
alt.Chart(states).mark_geoshape().encode().project(
type='albersUsa'
)
Can you do the same thing with counties and draw county boundaries? (hint: you have to use alt.topo_feature())
# Implement
alt.Chart(us_counties).mark_geoshape().encode().project(
type='albersUsa'
)
Let's load some county-level unemployment data.
unemp_data = data.unemployment(sep='\t')
unemp_data.head()
| | id | rate |
|---|---|---|
| 0 | 1001 | 0.097 |
| 1 | 1003 | 0.091 |
| 2 | 1005 | 0.134 |
| 3 | 1007 | 0.121 |
| 4 | 1009 | 0.099 |
This dataset has unemployment rates. From when? I don't know. We don't care about data provenance here because the goal is to quickly try out a choropleth. But if you're working with a real dataset, you should be very sensitive about its provenance. Make sure you understand where the data came from and how it was processed.
Anyway, the rate is given for each county, specified by id. To combine the two datasets, we use a "lookup transform": https://vega.github.io/vega/docs/transforms/lookup/. Essentially, we use the id in the map data to look up the (again) id field in unemp_data and bring in the rate variable. Then we can use that rate variable to encode the color of the geoshape mark.
alt.Chart(us_counties).mark_geoshape().project(
type='albersUsa'
).transform_lookup(
lookup='id',
from_=alt.LookupData(data.unemployment.url, 'id', ['rate'])
).encode(
color='rate:Q'
).properties(
width=700,
height=400
)
There you have it, a nice choropleth map. 😎
Although many geovisualizations use vector graphics, raster visualization is still useful, especially when you deal with images and lots of data points. Datashader is a package that aggregates and visualizes large amounts of data very quickly. Given a scene (visualization boundary, resolution, etc.), it quickly aggregates the data, produces pixels, and sends them to you.
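In a nutshell, the pipeline is: define a canvas (the scene), aggregate points into a grid, then shade the grid into an image. A minimal sketch, where df stands for any dataframe with the two coordinate columns (we will run the real thing below):
import datashader as ds
from datashader import transfer_functions as tf

# the "scene": pixel resolution plus the data ranges to render
cvs = ds.Canvas(plot_width=900, plot_height=480,
                x_range=(-74.1, -73.7), y_range=(40.6, 40.9))
agg = cvs.points(df, 'dropoff_longitude', 'dropoff_latitude')  # count per pixel
img = tf.shade(agg)  # map aggregated counts to colors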
To appreciate its power, we need a fairly large dataset. Let's use the NYC taxi trip dataset on Kaggle: https://www.kaggle.com/kentonnlp/2014-new-york-city-taxi-trips You can download even bigger trip data from the NYC Open Data website: https://opendata.cityofnewyork.us/data/
Ah, and you'll want to install datashader, bokeh, and holoviews first if you don't have them yet. If you have them, make sure they are the latest versions.
pip install -U datashader bokeh holoviews
or
conda install datashader bokeh holoviews
!pip install -U datashader bokeh holoviews
Collecting datashader
...
Successfully installed bokeh-2.4.3 datashader-0.14.3 datashape-0.5.2 holoviews-1.15.2 panel-0.14.1
%matplotlib inline
import pandas as pd
import datashader as ds
from datashader import transfer_functions as tf
from colorcet import fire
Because the dataset is pretty big, let's use a small sample first. For this visualization, we only keep the dropoff location.
Usually, we would use the remotezip package in Python to download and extract just the part of a big archive we need. But remotezip relies on the server supporting range requests, which is not the case here, so we have to download the dataset manually. We suggest you download the zip file of the dataset containing the CSV from Kaggle, extract it, and put the file path of the CSV file in the csv_path variable below.
from google.colab import drive
drive.mount('/content/drive')
dataCorePath = '/content/drive/MyDrive/Adc/'
Mounted at /content/drive
# read the dataset using the compression zip
#df = pd.read_csv(dataCorePath+'nyc_taxi_data_2014.csv.gz',compression='gzip')
#df.head()
#csv_path= '<<INSERT nyc_taxi_data_2014.csv file path>>'
try:
    nyctaxi_small = pd.read_csv(dataCorePath + 'nyc_taxi_data_2014.csv.gz', compression='gzip', nrows=10000,
                                usecols=['dropoff_longitude', 'dropoff_latitude'])
except Exception:
    print("Dataset path is not correct or not defined.")
    print("Creating a dummy dataset so that the code won't break; for the assignment, you must use the actual dataset.")
    nyctaxi_small = pd.DataFrame({"dropoff_longitude": [-73, -74], "dropoff_latitude": [40, 41]})
nyctaxi_small.head()
| | dropoff_longitude | dropoff_latitude |
|---|---|---|
| 0 | -73.982227 | 40.731790 |
| 1 | -73.960449 | 40.763995 |
| 2 | -73.986626 | 40.765217 |
| 3 | -73.979863 | 40.777050 |
| 4 | -73.984367 | 40.720524 |
Although the dataset is different, we can still follow the example here: https://datashader.org/getting_started/Introduction.html
agg = ds.Canvas().points(nyctaxi_small, 'dropoff_longitude', 'dropoff_latitude')
tf.set_background(tf.shade(agg, cmap=fire),"black")
Why can't we see anything? Wait, do you see the small dots at the top left? Can that be New York City? Maybe we don't see anything because some people travel very far? Or because the dataset has some missing data?
Q: Can you first check whether there are NaNs? Then drop them and draw the map again?
# Implement: Check whether we have NaNs
print(nyctaxi_small.isnull().values.any())
True
# Implement: drop the rows with NaN and then draw the map again.
nyctaxi_small = nyctaxi_small.dropna()
agg = ds.Canvas().points(nyctaxi_small, 'dropoff_longitude', 'dropoff_latitude')
tf.set_background(tf.shade(agg, cmap=fire),"black")
So it's not about the missing data.
Q: Can you identify the issue and draw the map like the following?
hint: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.between.html and histograms may be helpful.
nyctaxi_small.hist(bins=5)
array([[<matplotlib.axes._subplots.AxesSubplot object at 0x7f6aaf200c10>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f6ab2203510>]],
dtype=object)
print(nyctaxi_small.columns)
print(nyctaxi_small['dropoff_latitude'].max(), nyctaxi_small['dropoff_latitude'].min())
print(nyctaxi_small['dropoff_longitude'].max(), nyctaxi_small['dropoff_longitude'].min())
newdropOffLatRange = nyctaxi_small['dropoff_latitude'].loc[nyctaxi_small['dropoff_latitude'].between(10, 42)]
newdropOffLngRange = nyctaxi_small['dropoff_longitude'].loc[nyctaxi_small['dropoff_longitude'].between(-74.5, -10)]
print(pd.DataFrame(newdropOffLatRange, newdropOffLngRange))
Index(['dropoff_longitude', 'dropoff_latitude'], dtype='object')
41.071847 0.0
0.0 -74.36862499999998
dropoff_latitude
dropoff_longitude
-73.982227 NaN
-73.960449 NaN
-73.986626 NaN
-73.979863 NaN
-73.984367 NaN
... ...
-73.990344 NaN
-73.975818 NaN
-73.973647 NaN
-73.973057 NaN
-73.975406 NaN
[9817 rows x 1 columns]
print(nyctaxi_small.loc[(nyctaxi_small['dropoff_latitude'] < 40 )])
print(nyctaxi_small.loc[(nyctaxi_small['dropoff_longitude'] > -70 )])
dropoff_longitude dropoff_latitude
85 0.0 0.0
91 0.0 0.0
140 0.0 0.0
164 0.0 0.0
213 0.0 0.0
... ... ...
9880 0.0 0.0
9907 0.0 0.0
9911 0.0 0.0
9946 0.0 0.0
9981 0.0 0.0
[182 rows x 2 columns]
dropoff_longitude dropoff_latitude
85 0.0 0.0
91 0.0 0.0
140 0.0 0.0
164 0.0 0.0
213 0.0 0.0
... ... ...
9880 0.0 0.0
9907 0.0 0.0
9911 0.0 0.0
9946 0.0 0.0
9981 0.0 0.0
[182 rows x 2 columns]
newdropOffLatRange = nyctaxi_small['dropoff_latitude'].loc[(nyctaxi_small['dropoff_latitude'] > 40 )]
newdropOffLngRange = nyctaxi_small['dropoff_longitude'].loc[(nyctaxi_small['dropoff_longitude'] < -70 )]
# Implement. You can use multiple cells to figure out what's going on.
# TODO: once you figure it out, replace the dummy value of nyctaxi_small_filtered below with a dataframe where the issue is resolved
nyctaxi_small_filtered = pd.DataFrame({"dropoff_longitude": newdropOffLngRange, "dropoff_latitude": newdropOffLatRange})
nyctaxi_small_filtered
| | dropoff_longitude | dropoff_latitude |
|---|---|---|
| 0 | -73.982227 | 40.731790 |
| 1 | -73.960449 | 40.763995 |
| 2 | -73.986626 | 40.765217 |
| 3 | -73.979863 | 40.777050 |
| 4 | -73.984367 | 40.720524 |
| ... | ... | ... |
| 9995 | -73.990344 | 40.739028 |
| 9996 | -73.975818 | 40.763701 |
| 9997 | -73.973647 | 40.787233 |
| 9998 | -73.973057 | 40.751087 |
| 9999 | -73.975406 | 40.752156 |
9817 rows × 2 columns
agg = ds.Canvas().points(nyctaxi_small_filtered, 'dropoff_longitude', 'dropoff_latitude')
tf.set_background(tf.shade(agg, cmap=fire), "black")
Do you see the black empty space near the center? That looks like Central Park. This is cool, but it would be awesome if we could explore the data interactively.
Q. Ok, now let's get serious by loading the whole dataset. It may take some time. Apply the same data cleaning procedure.
# Implement
#loading dataset
nyctaxi_complete = pd.read_csv(dataCorePath+'nyc_taxi_data_2014.csv.gz',compression='gzip', usecols=['dropoff_longitude', 'dropoff_latitude'])
# filter the data and rebuild the dataframe
nyctaxi_complete = nyctaxi_complete.loc[(nyctaxi_complete['dropoff_longitude'].between(-74.07, -73.8)) & (nyctaxi_complete['dropoff_latitude'].between(40.58, 40.92))]
nyctaxi_complete.hist()
array([[<matplotlib.axes._subplots.AxesSubplot object at 0x7f6a9b3f5f10>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f6a9b384d90>]],
dtype=object)
Can you feed the data directly to datashader to reproduce the static plot, this time with the full data?
#Plot
agg = ds.Canvas().points(nyctaxi_complete, 'dropoff_longitude', 'dropoff_latitude')
tf.set_background(tf.shade(agg, cmap=fire), "black")
Wow, that's fast. Also it looks cool!
Let's try the interactive version from here: https://datashader.org/getting_started/Introduction.html
import holoviews as hv
from holoviews.element.tiles import EsriImagery
from holoviews.operation.datashader import datashade
hv.extension('bokeh')
map_tiles = EsriImagery().opts(alpha=0.5, width=900, height=480, bgcolor='black')
points = hv.Points(nyctaxi_small_filtered, ['dropoff_longitude', 'dropoff_latitude'])
taxi_trips = datashade(points, x_sampling=1, y_sampling=1, cmap=fire, width=900, height=480)
map_tiles * taxi_trips
Why does it say "map data not yet available"? The reason is the difference between two coordinate systems: our points are in raw lat/lng, while the map tiles use Web Mercator coordinates. If you google this error message, you can find https://stackoverflow.com/questions/44487898/map-background-with-datashader-map-data-not-yet-available.
You can use datashader.utils.lnglat_to_meters to convert your latitudes and longitudes to the Web Mercator format that the tile sources understand. More on this here: https://datashader.org/user_guide/Geography.html
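Under the hood this is just the Web Mercator (EPSG:3857) formula. A rough sketch of what lnglat_to_meters computes (use the library function in practice):
import numpy as np

def lnglat_to_web_mercator(lng, lat):
    # half of the Earth's circumference at the equator, in meters
    origin_shift = np.pi * 6378137
    x = lng * origin_shift / 180.0
    y = np.log(np.tan((90 + lat) * np.pi / 360.0)) * origin_shift / np.pi
    return x, y

lnglat_to_web_mercator(-73.98, 40.75)  # roughly (-8.24e6, 4.97e6)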
Q: Can you draw an interactive map by converting the lnglat data to x, y coordinate explained above?
# Implement
import datashader as ds
nyctaxi_complete.loc[:, 'dropoff_longitude'], nyctaxi_complete.loc[:, 'dropoff_latitude'] = ds.utils.lnglat_to_meters(nyctaxi_complete.dropoff_longitude,nyctaxi_complete.dropoff_latitude)
print(nyctaxi_complete)
          dropoff_longitude  dropoff_latitude
0             -8.235664e+06      4.972861e+06
1             -8.233240e+06      4.977593e+06
2             -8.236154e+06      4.977773e+06
3             -8.235401e+06      4.979512e+06
4             -8.235902e+06      4.971206e+06
...                     ...               ...
14999993      -8.235385e+06      4.977925e+06
14999994      -8.237717e+06      4.971972e+06
14999995      -8.236672e+06      4.967096e+06
14999997      -8.238305e+06      4.969796e+06
14999998      -8.234571e+06      4.975164e+06

[14267630 rows x 2 columns]
hv.extension('bokeh')
map_tiles = EsriImagery().opts(alpha=0.5, width=900, height=480, bgcolor='black')
points = hv.Points(nyctaxi_complete, ['dropoff_longitude', 'dropoff_latitude'])
taxi_trips = datashade(points, x_sampling=1, y_sampling=1, cmap=fire, width=900, height=480)
map_tiles * taxi_trips
It's interactive! Actually, if you are running a bokeh server with a live Python process, the map quickly refreshes and shows more detail as you zoom.
Q: how many rows (data points) are we visualizing right now?
# figure it out
print(len(nyctaxi_complete))
14267630
That's a lot of data points. If we were using a vector format, it would probably be hopeless to expect any interactivity, because you would need to move that many points! Yet datashader + holoviews + bokeh renders everything almost in real time!
Another useful tool is Leaflet. It lets you use various map tile providers (Google Maps, OpenStreetMap, ...) with many types of marks (points, heatmaps, etc.). Leaflet.js is one of the easiest options for doing this on the web, and there is a Python bridge for it: https://github.com/jupyter-widgets/ipyleaflet. Although we will not go into details, it's certainly worth checking out if you're working with geographical data.
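To give you a taste, a minimal ipyleaflet sketch (assuming pip install ipyleaflet; the coordinates are just an example near midtown Manhattan):
from ipyleaflet import Map, Marker

m = Map(center=(40.7537, -73.9837), zoom=12)   # (lat, lng) of the map center
m.add_layer(Marker(location=(40.7537, -73.9837)))
m  # display the interactive map in the notebook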